Active Image-based Modeling with a Toy Drone
Image-based modeling techniques can now generate photo-realistic 3D models
from images. But it is up to users to provide high quality images with good
coverage and view overlap, which makes the data capturing process tedious and
time consuming. We seek to automate data capturing for image-based modeling.
The core of our system is an iterative linear method to solve the multi-view
stereo (MVS) problem quickly and plan the Next-Best-View (NBV) effectively. Our
fast MVS algorithm enables online model reconstruction and quality assessment
to determine the NBVs on the fly. We test our system with a toy unmanned aerial
vehicle (UAV) in simulated, indoor and outdoor experiments. Results show that
our system improves the efficiency of data acquisition and ensures the
completeness of the final model.
Comment: To be published at the International Conference on Robotics and Automation 2018, Brisbane, Australia. Project page: https://huangrui815.github.io/active-image-based-modeling/ Author's personal page: http://www.sfu.ca/~rha55
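The capture loop the abstract describes (online MVS reconstruction, quality assessment, then greedy Next-Best-View selection) could be sketched roughly as below; the function names and the simple greedy-gain criterion are illustrative assumptions, not the paper's actual iterative linear MVS solver:

```python
def next_best_view(model, candidates, expected_gain):
    """Greedy NBV: pick the candidate view with the highest expected
    improvement in model completeness/quality."""
    return max(candidates, key=lambda view: expected_gain(model, view))

def active_capture(capture, reconstruct, propose_views, expected_gain,
                   is_complete, max_views=50):
    """Capture loop: reconstruct online from the images so far, stop when
    the model is complete, otherwise fly to and capture the next best view."""
    images, model = [], None
    for _ in range(max_views):
        model = reconstruct(images)    # fast online MVS update
        if is_complete(model):
            break
        view = next_best_view(model, propose_views(model), expected_gain)
        images.append(capture(view))   # move the UAV and take a photo there
    return model, images
```

The key property the paper exploits is that the MVS step is fast enough to sit inside this loop, so view planning can react to the partial model rather than follow a fixed flight path.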
Entanglement witness and entropy uncertainty of open Quantum systems under Zeno effect
The entanglement witness and the entropy uncertainty are investigated using pseudomode theory for an open two-atom system under the quantum Zeno effect. The results show that only when both spectra are strongly coupled to the atoms can the time of entanglement witnessing be prolonged, the lower bound of the entropic uncertainty be reduced, and the entanglement be witnessed many times. We also give a physical explanation in terms of non-Markovianity. The Zeno effect not only very effectively prolongs the time of entanglement witnessing and reduces the lower bound of the entropic uncertainty, but also greatly increases how often entanglement can be witnessed and reduces the entanglement value required for witnessing.
Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons
Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous Localization and Mapping (SLAM) systems has attracted increasing attention as a route to global and continuous localization. Nonetheless, in dense urban environments, GNSS-based SLAM systems suffer from Non-Line-Of-Sight (NLOS) measurements, which can lead to a sharp deterioration in localization results. In this paper, we propose to detect the sky area from the up-looking
camera to improve GNSS measurement reliability for more accurate position
estimation. We present Sky-GVINS: a sky-aware GNSS-Visual-Inertial system based
on a recent work called GVINS. Specifically, we adopt a global threshold method
to segment the sky regions and non-sky regions in the fish-eye sky-pointing
image, and then project the satellites onto the image using the geometric relationship between the satellites and the camera. After that, we reject satellites that fall in non-sky regions to eliminate NLOS signals. We investigate various segmentation algorithms for sky detection and find that the Otsu algorithm achieves the highest classification rate and computational efficiency despite its simplicity and ease of implementation. To evaluate the
effectiveness of Sky-GVINS, we built a ground robot and conducted extensive
real-world experiments on campus. Experimental results show that our method
improves localization accuracy in both open areas and dense urban environments
compared to the baseline method. Finally, we also conduct a detailed analysis
and point out possible further directions for future research. For detailed
information, visit our project website at
https://github.com/SJTU-ViSYS/Sky-GVINS
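The per-image pipeline described above (a global Otsu threshold on the fish-eye sky image, then rejection of satellites projected into non-sky pixels) can be sketched as follows; the helper names and the toy image are illustrative assumptions, not code from the project:

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                  # pixels at or below threshold t
        if w0 == 0:
            continue
        w1 = total - w0                # pixels above threshold t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def reject_nlos(sat_pixels, sky_mask):
    """Keep only satellites whose projected (x, y) pixel lies in the sky."""
    return [p for p in sat_pixels if sky_mask[p[1], p[0]]]

# Toy image: bright "sky" (top half) over dark "buildings" (bottom half).
img = np.vstack([np.full((50, 100), 200, np.uint8),
                 np.full((50, 100), 40, np.uint8)])
t = otsu_threshold(img)
sky = img > t                          # sky pixels are brighter than t
sats = [(10, 10), (10, 90)]            # projected satellite pixels (x, y)
los = reject_nlos(sats, sky)           # second satellite is occluded (NLOS)
```

In practice the sky mask comes from the upward-facing fish-eye camera and the satellite projection uses the camera intrinsics and the satellite elevation/azimuth, but the threshold-then-reject structure is the same.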
StructVIO: Visual-inertial Odometry with Structural Regularity of Man-made Environments
We propose a novel visual-inertial odometry approach that exploits structural regularity in man-made environments. Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity. An Atlanta world contains multiple local Manhattan worlds with different heading directions. Each local Manhattan world is detected on the fly, and its heading is gradually refined by the state estimator as new observations arrive. By fully exploiting structural lines aligned with each local Manhattan world, our visual-inertial odometry method becomes more accurate and robust, as well as far more flexible across different kinds of complex man-made environments. Extensive benchmark and real-world tests show that the proposed approach outperforms existing visual-inertial systems in large-scale man-made environments.
Comment: 15 pages, 15 figures
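The core Atlanta-world idea, assigning a detected horizontal structural line to whichever local Manhattan world's axes it best matches, can be sketched as below; the tolerance and function names are illustrative assumptions, not the paper's estimator:

```python
import math

def assign_manhattan_world(line_heading, world_headings,
                           tol=math.radians(10)):
    """Assign a horizontal structural line to the local Manhattan world
    whose axes (heading and heading + 90 deg) best match it, else None."""
    best, best_err = None, tol
    for i, h in enumerate(world_headings):
        # A Manhattan world's horizontal axes repeat every 90 degrees,
        # so fold the angular error into [-45, +45] degrees.
        err = abs((line_heading - h + math.pi / 4) % (math.pi / 2)
                  - math.pi / 4)
        if err < best_err:
            best, best_err = i, err
    return best

# An Atlanta world with two local Manhattan worlds, headed 0 and 30 deg.
worlds = [0.0, math.radians(30)]
assign_manhattan_world(math.radians(91), worlds)  # -> 0 (axis of world 0)
assign_manhattan_world(math.radians(31), worlds)  # -> 1
assign_manhattan_world(math.radians(50), worlds)  # -> None (no match)
```

Lines that match no existing world would seed a new local Manhattan world, whose heading the state estimator then refines as new observations arrive.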